DeepMind's CEO Helped Take AI Mainstream. Now He's Urging Caution

TIME - Tech

Demis Hassabis stands halfway up a spiral staircase, surveying the cathedral he built. The DNA sculpture, spanning three floors, is the centerpiece of DeepMind's recently opened London headquarters. It's an artistic representation of the code embedded in the nucleus of nearly every cell in the human body. "Although we work on making machines smart, we wanted to keep humanity at the center of what we're doing here," Hassabis, DeepMind's CEO and co-founder, tells TIME. This building, he says, is a "cathedral to knowledge." Each meeting room is named after a famous scientist or philosopher; we meet in the one dedicated to James Clerk Maxwell, the man who first theorized electromagnetic radiation. "I've always thought of DeepMind as an ode to intelligence," Hassabis says. Hassabis, 46, has always been obsessed with intelligence: what it is, the possibilities it unlocks, and how to acquire more of it.


For a Second There, Someone Thought Using Taser Drones to Stop School Shootings Was a Good Idea

Slate

Armed police couldn't stop the shooters in Buffalo and in Uvalde. But perhaps a very small drone equipped with a Taser could. Specifically, Axon CEO Rick Smith proposed in a Thursday announcement "non-lethal drones capable of incapacitating an active shooter in less than 60 seconds" (or so the press release goes), which would be stationed inside schools. At the push of a panic button, a trained human pilot at a control center elsewhere in the country would launch a drone. With the help of a network of security cameras, the pilot would try to fire the drone's onboard Taser probes into the shooter's flesh, in the hope of keeping them down until police could arrive on the scene.


Taser drone project gets the ax

#artificialintelligence

Axon, the company best known for developing the Taser, said Monday it was dropping plans to develop a Taser-equipped drone after a majority of its ethics board resigned in protest. Axon's founder and CEO Rick Smith said the company's announcement last week -- which drew a rebuke from its artificial intelligence ethics board -- was intended to "initiate a conversation on this as a potential solution." Smith said the ensuing discussion "provided us with a deeper appreciation of the complex and important considerations" around the issue. As a result, "we are pausing work on this project and refocusing to further engage with key constituencies to fully explore the best path forward," he said. The development was first reported by Reuters.


Axon's AI Ethics Board resigns over plan to surveil schools with armed drones – TechCrunch

#artificialintelligence

Nine of 12 members of an ethics board appointed by Axon to advise its technology decisions have resigned, citing the company's plan to install Taser-equipped drones and pervasive surveillance at schools. "After several years of work, the company has fundamentally failed to embrace the values that we have tried to instill," the departing members write. "We have lost faith in Axon's ability to be a responsible partner." Axon (formerly Taser) has grown into a juggernaut of law enforcement software and hardware in recent years, providing not just the familiar and formerly eponymous electric weapons but body cameras and entire digital platforms for evidence management. Setting aside for now the inherent risks of privatizing such things, Axon has been rather surprisingly thoughtful with its tech, soliciting the advice of the communities these tools will be used in as well as the cops who will wear or wield them.


Axon halts plans to make a drone equipped with a Taser

Engadget

Axon has paused work on a project to build drones equipped with its Tasers. A majority of its artificial intelligence ethics board quit after the plan was announced last week. Nine of the 12 members said in a resignation letter that, just a few weeks ago, the board voted 8-4 to recommend that Axon shouldn't move forward with a pilot study for a Taser-equipped drone concept. "In that limited conception, the Taser-equipped drone was to be used only in situations in which it might avoid a police officer using a firearm, thereby potentially saving a life," the nine board members wrote. They noted Axon might decline to follow that recommendation and were working on a report regarding measures the company should have in place were it to move forward.


A firm proposes Taser-armed drones to stop school shootings

NPR Technology

This photo provided by Axon Enterprise depicts a conceptual design through a computer-generated rendering of a Taser drone. Taser developer Axon said this week it is working to build drones armed with the electric stunning weapons that could fly in schools and "help prevent the next Uvalde, Sandy Hook, or Columbine." But its own technology advisers quickly panned the idea as a dangerous fantasy. The publicly traded company, which sells Tasers and police body cameras, floated the idea of a new police drone product last year to its artificial intelligence ethics board, a group of well-respected experts in technology, policing and privacy. Some of them expressed reservations about weaponizing drones in over-policed communities of color.


'Tech for Good' Needs a 'Good Tech' Approach

#artificialintelligence

Technology has always been a double-edged sword. While it's been a major force for progress, it has also been abused and caused harm. From steam power to Fordism, history shows that technology is neither good nor bad – by itself. It can, of course, be both, depending on how it's used. Telecommunications, specifically the internet, and more recently AI, which is estimated to contribute more than €11 billion to the global economy by 2030, are no different.


Why You Need an AI & Ethics Board

#artificialintelligence

Most businesses today have a great deal of data at their fingertips. They also have the tools to mine this information. But with this power comes responsibility. Before using data, technologists need to step back and evaluate the need. In today's data-driven, virtual age, it's not a question of whether you have the information, but if you should use it and how.


DeepMind co-founder Mustafa Suleyman departs Google

#artificialintelligence

DeepMind co-founder Mustafa Suleyman has departed Google after an eight-year stint at the company. Suleyman co-founded AI giant DeepMind alongside Demis Hassabis and Shane Legg in 2010, before it was acquired by Google in 2014 for $500 million. DeepMind has become something of an AI darling and has repeatedly made headlines for creating neural networks that have beaten human capabilities in a range of games. DeepMind's AlphaGo even beat Go world champion Lee Sedol in a five-game match. Suleyman left DeepMind for Google in 2019 and was most recently the company's vice president of AI product management and policy.


Why do companies struggle with ethical artificial intelligence?

#artificialintelligence

Some of the world's biggest organizations, from the United Nations to Google to the U.S. Defense Department, proudly proclaim their bona fides when it comes to their ethical use of artificial intelligence. But for many other organizations, talking the talk is the easy part. A new report by a pair of Northeastern researchers discusses how articulating values, ethical concepts, and principles is just the first step in addressing AI and data ethics challenges. The harder work is moving from vague, abstract promises to substantive commitments that are action-guiding and measurable. "You see case after case where a company has these mission statements that they fail to live up to," says John Basl, an associate professor of philosophy and a co-author of the report.